Associate Professor | School of Information Science
What is the difference between a conceptual definition and an operational definition?
These are not interchangeable
What is the difference between an independent and a dependent variable?
“variables thought to influence changes in another variable (the dependent variable).”
Known as the IV (sometimes called explanatory variable or PREDICTOR variable in non-experimental research)
When two or more independent variables are used in a “factorial design,” the IVs are referred to as factors.
“variables thought to be changed by another variable (the independent variable).”
Known as the DV (sometimes called outcome variable or CRITERION variable in non-experimental research)
What is the difference between experimental research and survey research?
Why is quantitative research important to the communication discipline?
Among other reasons…
What are the two characteristics used to evaluate internal validity?
. . .
Internal validity refers to the extent to which the independent variable, treatment, or intervention caused the change in the dependent variable. We evaluate it based on…
What are the broad threats to internal validity?
. . .
What are the characteristics of a sample frame researchers should evaluate in determining its usefulness?
. . .
The sampling frame represents an exhaustive list of the participants that a researcher could realistically access for a study.
What are the two types of external validity?
External validity is the extent to which samples, settings, and variables can be generalized beyond the study.
Whether the conditions, settings, times, testers, and procedures are representative of natural conditions and, thus, whether results can be generalized to real-life outcomes.
In other words: Is the research environment similar to the natural environment? Does the manipulation of the IV feel real to the participants?
Let’s say your group wants to study how students’ public speaking anxiety influences grades on a speech.
Listed below are some differences among the five approaches to research. Match the description (A–E) that best fits the type of approach (a–e).
A specific research design helps us visualize the independent variables of the study, the levels within these independent variables, and when measurement of the dependent variable will take place.
You are a researcher in science education who is interested in the role of diagrams in instruction. You wish to investigate whether using diagrams in place of text will facilitate comprehension of the principles and concepts taught. To do so, you have developed a 12th-grade physics unit that incorporates the liberal use of diagrams. You plan to compare students’ knowledge of physics before and after the instructional unit. You will teach one of your classes using the diagram unit and the other using the text-only unit.
Pretest–posttest nonequivalent comparison group design
Moderate-strength quasi-experimental design: treatments are assigned nonrandomly to groups that are probably similar
Establish a baseline (A), introduce the intervention (B), withdraw it and wait for behavior to level off, then intervene again: AB -> AB (an ABAB reversal design)
Using very few participants increases the flexibility of the design and leads to completely different methods of data analysis. These single-subject designs use numerous repeated measures on each participant and the initiation and withdrawal of treatment.
There is no active IV and the researcher does not control the IV.
The IV is measured instead.
Can you provide an example of each?
The goal of our lesson on measuring the DV is for you to leave class with a basic understanding of the differences between conceptual and operational approaches, as well as the general steps in developing a measure to adequately assess a variable of interest.
Power is the probability of detecting an effect, given that the effect is really there.
In other words, it is the probability of rejecting the null hypothesis when it is in fact false.
For example, let’s say that we have a simple study with drug A and a placebo (control) group, and that the drug truly is effective; the power is the probability of finding a difference between the two groups. So, imagine that we had a power of .8 and that this simple study was conducted many times. Having power of .8 means that 80% of the time, we would get a statistically significant difference between the drug A and placebo groups. This also means that 20% of the times that we run this experiment, we will not obtain a statistically significant effect between the two groups, even though there really is an effect in reality.
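The drug-versus-placebo thought experiment above can be run as a small simulation. This is a minimal sketch, not part of the lecture: it assumes normally distributed outcomes with SD = 1 and uses a simple two-tailed z-style test; the function name and the specific numbers are illustrative choices.

```python
import random
import statistics

random.seed(42)

def two_group_power(effect_size, n_per_group, sims=2000):
    """Estimate power by simulating a drug-vs-placebo study many times.

    Assumes normally distributed outcomes (SD = 1) and a two-tailed
    z-style test at alpha = .05 (illustrative sketch only).
    """
    hits = 0
    for _ in range(sims):
        drug = [random.gauss(effect_size, 1) for _ in range(n_per_group)]
        placebo = [random.gauss(0, 1) for _ in range(n_per_group)]
        diff = statistics.mean(drug) - statistics.mean(placebo)
        se = (2 / n_per_group) ** 0.5   # SE of the mean difference when SD = 1
        if abs(diff / se) > 1.96:       # two-tailed critical value for alpha = .05
            hits += 1
    return hits / sims                  # share of runs that detected the effect

# With a large true effect (d = 0.8) and 25 people per group,
# the estimate lands in the neighborhood of .80 (exact value varies by run).
print(two_group_power(effect_size=0.8, n_per_group=25))
```

Running the "study" thousands of times and counting how often the test comes out significant is exactly what "power of .8 means 80% of replications detect the effect" describes.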
Effect Size: A numerical value representing the strength of the relationship between the IV and DV
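One common numerical value of this kind is Cohen's d, the mean difference scaled by the pooled standard deviation. A quick sketch, with made-up scores for illustration:

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = (((n1 - 1) * statistics.variance(group1)
                   + (n2 - 1) * statistics.variance(group2))
                  / (n1 + n2 - 2))
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_var ** 0.5

# Hypothetical outcome scores for a treatment and a control group
drug = [5.1, 6.2, 5.8, 6.5, 5.9]
placebo = [4.8, 5.0, 4.6, 5.3, 4.9]
print(round(cohens_d(drug, placebo), 2))  # a very large effect in this toy data
```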
Type I error: Occurs when the null hypothesis is true (in other words, there really is no effect), but you reject the null hypothesis
Type II error: Occurs when the alternative hypothesis is correct, but you fail to reject the null hypothesis (in other words, there really is an effect, but you failed to detect it)
The probability of a Type I error (reject a true null).
For a test with a significance level of 0.05 (1/20), a true null hypothesis will be rejected, on average, one out of every 20 times.
We are willing to live with a 5% chance that we will conclude that there is a difference when there really isn’t (we are 95% confident).
The probability of a Type II error (fail to reject a false null)
The probability that we fail to reject the null hypothesis even though the alternative hypothesis is actually true
If power is .80 or 80%, then beta is .2 or 20%
If we want to avoid false positives (Type I errors), we lower our alpha (i.e., raise our confidence level)
BUT, the more stringent we are at avoiding false positives, the more we increase the probability of a false negative
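The alpha/beta tradeoff can be made concrete with the same kind of simulation: tighten the critical value (alpha = .01 instead of .05) and the false-positive rate drops, but so does power, so beta rises. The setup below (two groups, SD = 1, z-style test) is an illustrative sketch, not a prescribed analysis.

```python
import random
import statistics

random.seed(1)

def reject_rate(true_effect, n, z_crit, sims=2000):
    """Share of simulated two-group studies whose z statistic clears z_crit."""
    hits = 0
    for _ in range(sims):
        treat = [random.gauss(true_effect, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        z = (statistics.mean(treat) - statistics.mean(control)) / (2 / n) ** 0.5
        if abs(z) > z_crit:
            hits += 1
    return hits / sims

# Stricter alpha (.01, z = 2.576) vs. conventional alpha (.05, z = 1.96):
for label, z_crit in [("alpha = .05", 1.96), ("alpha = .01", 2.576)]:
    fp = reject_rate(true_effect=0.0, n=25, z_crit=z_crit)     # Type I rate
    power = reject_rate(true_effect=0.5, n=25, z_crit=z_crit)  # 1 - beta
    print(label, "| false positives:", fp, "| power:", power, "| beta:", 1 - power)
```

With no true effect, the rejection rate tracks alpha; with a real effect, the stricter cutoff detects it less often, which is the rise in false negatives described above.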
Think about how you are going to operationalize your IV (manipulate it, measure it continuously, observe it, etc.). Let’s sketch out the design of your study (e.g., experiment vs. survey), identifying key elements like the sample, procedure, survey flow, and manipulation.